186 research outputs found

    Color image segmentation using markov random field models

    In this thesis, the problem of color image segmentation is addressed in a stochastic framework. The problem is formulated as a pixel-labelling problem, and the pixel labels are estimated using the maximum a posteriori (MAP) criterion. The observed image is viewed as a degraded version of the true labels, and the degradation process is assumed to be Gaussian. The image labels are modelled as a Markov Random Field (MRF), and the Ohta (I1, I2, I3) color model is used.
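    As a rough illustration of the MAP-MRF pipeline this abstract describes, the sketch below runs Iterated Conditional Modes (ICM, one common MAP optimizer; the thesis may use a different one) with a Gaussian data term and a Potts label prior. The class means, `beta`, and `sigma` are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def icm_segment(img, means, beta=1.0, sigma=1.0, n_iter=5):
    """MAP labelling via Iterated Conditional Modes (a sketch).

    img   : (H, W, C) observed color image (e.g. Ohta I1, I2, I3 channels)
    means : (K, C) assumed class means of the Gaussian degradation model
    beta  : Potts smoothness weight of the MRF label prior
    """
    H, W, _ = img.shape
    K = len(means)
    # data cost: negative Gaussian log-likelihood per pixel and label
    diff = img[:, :, None, :] - means[None, None, :, :]
    data = (diff ** 2).sum(-1) / (2 * sigma ** 2)      # (H, W, K)
    labels = data.argmin(-1)                           # ML initialisation
    for _ in range(n_iter):
        for y in range(H):
            for x in range(W):
                cost = data[y, x].copy()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        # Potts prior: penalise disagreeing neighbours
                        cost += beta * (np.arange(K) != labels[ny, nx])
                labels[y, x] = cost.argmin()
    return labels
```

    On a noiseless two-region image the labelling recovers the regions exactly; on noisy input the Potts term smooths isolated mislabels.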

    Area-norm COBRA on Conditional Survival Prediction

    The paper explores a variation of the combined regression strategy for estimating the conditional survival function. We build the proposed ensemble from regression-based weak learners and use the area between two survival curves as the proximity measure. The proposed model's construction ensures that it performs better than the Random Survival Forest. The paper also presents a novel technique for selecting the most important variables in the combined regression setup. A simulation study shows that our proposal for ranking variable relevance works well, and three real-life datasets illustrate the model.
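    A toy version of the proximity idea: the area between two survival curves serves as the distance, and a COBRA-style aggregation keeps only training points that every machine places within `eps` of the query prediction. The function names, the all-machines selection rule, and the fallback are assumptions for illustration, not the paper's code.

```python
import numpy as np

def area_between(s1, s2, times):
    """Area between two survival curves on a common time grid
    (the proximity measure), via the trapezoid rule."""
    a = np.abs(s1 - s2)
    return float(np.sum(0.5 * (a[:-1] + a[1:]) * np.diff(times)))

def cobra_survival(train_preds, query_preds, train_curves, times, eps=0.1):
    """COBRA-style aggregation sketch.

    train_preds : (M, n, T) curves each of M machines predicts for the
                  n training points
    query_preds : (M, T) the machines' curves for the query point
    train_curves: (n, T) curves to average over the selected points
    """
    M, n, T = train_preds.shape
    keep = np.ones(n, dtype=bool)
    for m in range(M):
        # keep a training point only if machine m places it within eps
        d = np.array([area_between(train_preds[m, i], query_preds[m], times)
                      for i in range(n)])
        keep &= d <= eps
    if not keep.any():
        return train_curves.mean(0)   # fallback: plain average
    return train_curves[keep].mean(0)
```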

    Integrated Brier Score based Survival Cobra -- A regression based approach

    Recently, Goswami et al. \cite{goswami2022concordance} introduced two novel implementations of the combined regression strategy for estimating the conditional survival function. This paper uses regression-based weak learners and provides an alternative version of the combined regression strategy (COBRA) ensemble that predicts the conditional survival function using the Integrated Brier Score. We create a novel predictor as a weighted combination of all machine predictions, with weights given by a specific function of the normalized Integrated Brier Score, and we use two different norms (Frobenius and sup norm) to extract the proximity points in the algorithm. Our implementations also handle right-censored data. We illustrate the proposed algorithms through real-life data analyses. (Comment: arXiv admin note: text overlap with arXiv:2209.1191)
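    A minimal sketch of IBS-based weighting: each machine's predicted curves are scored against observed event times, and machines with lower Integrated Brier Score get larger weight. Censoring weights are omitted for brevity, and the inverse-score weighting is an assumed stand-in for the paper's unspecified "specific function".

```python
import numpy as np

def integrated_brier(surv, event_time, times):
    """Integrated Brier Score for one machine's curves (sketch,
    ignoring inverse-probability-of-censoring weights).

    surv : (n, T) predicted survival curves; event_time : (n,)."""
    # true survival status: 1 while the subject is still event-free
    status = (event_time[:, None] > times[None, :]).astype(float)
    bs = ((surv - status) ** 2).mean(0)               # Brier score per time
    # trapezoid-rule integral, normalised by the time span
    return np.sum(0.5 * (bs[:-1] + bs[1:]) * np.diff(times)) / (times[-1] - times[0])

def ibs_weights(ibs_scores):
    """Machine weights as a decreasing function of IBS
    (assumed choice: inverse scores, renormalised to sum to 1)."""
    s = np.asarray(ibs_scores, float)
    inv = 1.0 / (s + 1e-12)
    return inv / inv.sum()
```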

    Process fault detection and diagnosis of fed-batch plant using multiway principal component analysis

    With the advent of new technologies, process plants, whether continuous or batch, are becoming increasingly complex, and modelling them mathematically is a herculean task. Model-based fault detection and diagnosis depend on an explicit mathematical model of the process plant, which is the biggest drawback of the model-based approach. Process-history-based methods, in contrast, need no explicit model of the plant: they rely only on data from previous runs. With advances in electronic instrumentation, large amounts of data can be collected electronically, but the raw data are not directly useful for decision-making, so we need techniques that extract information about the ongoing process. We therefore turn to multivariate statistics such as Principal Component Analysis (PCA) and Partial Least Squares (PLS). These methods exploit the facts that process data are highly correlated and high-dimensional, which allows the data to be compressed to a lower-dimensional space. By examining the data in that lower-dimensional space, we can monitor the plant and detect faults.
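    The unfold-then-PCA scheme behind multiway PCA can be sketched as follows: batch data (batches × times × variables) are unfolded into one row per batch, a PCA model is fit on normal-operation batches, and a new batch is flagged via its squared prediction error (the SPE or Q statistic, one standard fault indicator). Shapes and thresholds here are illustrative assumptions.

```python
import numpy as np

def mpca_fit(batches, n_comp=2):
    """Multiway PCA sketch: unfold (I batches, J times, K vars) to
    (I, J*K), mean-centre, then PCA via SVD."""
    I, J, K = batches.shape
    X = batches.reshape(I, J * K)
    mean = X.mean(0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_comp].T           # loading matrix
    return mean, P

def spe(batch, mean, P):
    """Squared prediction error (Q statistic) of a new batch against
    the normal-operation model; large values signal a fault."""
    x = batch.ravel() - mean
    resid = x - P @ (P.T @ x)   # part not explained by the model
    return float(resid @ resid)
```

    In practice the SPE is compared against a control limit estimated from the normal batches; a batch whose variation lies outside the retained subspace produces a large SPE.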

    Procedural generation of features for volumetric terrains using a rule-based approach.

    Terrain generation is a fundamental requirement of many computer graphics simulations, including computer games, flight simulators and environments in feature films. Volumetric representations of 3D terrains can create rich features that are either impossible or very difficult to construct in other forms of terrain generation techniques, such as overhangs, arches and caves. While a considerable amount of literature has focused on procedural generation of terrains using heightmap-based implementations, there is little research found on procedural terrains utilising a voxel-based approach. This thesis contributes two methods to procedurally generate features for terrains that utilise a volumetric representation. The first method is a novel grammar-based approach to generate overhangs and caves from a set of rules. This voxel grammar provides a flexible and intuitive method of manipulating voxels from a set of symbol/transform pairs that can provide a variety of different feature shapes and sizes. The second method implements three parametric functions for overhangs, caves and arches. This generates a set of voxels procedurally based on the parameters of a function selected by the user. A small set of parameters for each generator function yields a widely varied set of features and provides the user with a high degree of expressivity. In order to analyse the expressivity, this thesis’ third contribution is an original method of quantitatively valuing a result of a generator function. This research is a collaboration with Sony Interactive Entertainment and their proprietary game engine PhyreEngine™. The methods presented have been integrated into the engine’s terrain system. Thus, there is a focus on real-time performance so as to be feasible for game developers to use while adhering to strict sub-second frame times of modern computer games.
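    A toy symbol/transform voxel grammar in the spirit described above: the grid holds symbols, and applying the grammar rewrites every voxel carrying a rule's symbol with that rule's transform. The symbols, the sphere-carving transform, and the rule format are invented for illustration and are not the thesis's grammar.

```python
import numpy as np

# illustrative voxel symbols
SOLID, AIR, CAVE_SEED = 0, 1, 2

def carve_sphere(grid, centre, radius=1):
    """Transform: clear all voxels within `radius` of the seed,
    carving a small spherical cave."""
    z, y, x = np.indices(grid.shape)
    mask = ((z - centre[0]) ** 2 + (y - centre[1]) ** 2
            + (x - centre[2]) ** 2) <= radius ** 2
    grid[mask] = AIR

def apply_grammar(grid, rules):
    """Apply each symbol -> transform rule to every voxel that
    currently carries that symbol."""
    for symbol, transform in rules.items():
        for pos in zip(*np.nonzero(grid == symbol)):
            transform(grid, pos)
    return grid
```

    Richer grammars would match neighbourhood patterns rather than single symbols, which is what makes varied overhang and cave shapes possible.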

    Covariance Estimation and Principal Component Analysis for Mixed-Type Functional Data with application to mHealth in Mood Disorders

    Mobile digital health (mHealth) studies often collect multiple within-day self-reported assessments of participants' behaviour and health. Indexed by time of day, these assessments can be treated as functional observations of continuous, truncated, ordinal, and binary types. We develop covariance estimation and principal component analysis for such mixed-type functional data. We propose a semiparametric Gaussian copula model in which a generalized latent nonparanormal process generates the observed mixed-type functional data and defines temporal dependence via a latent covariance. A smooth estimate of the latent covariance is constructed via a Kendall's tau bridging method that incorporates smoothness within the bridging step. The approach is then extended with methods for handling both dense and sparse sampling designs, and for calculating subject-specific latent representations of the observed data, latent principal components, and principal component scores. Importantly, the proposed framework handles all four mixed types in a unified way. Simulation studies show competitive performance of the proposed method under both dense and sparse sampling designs. The method is applied to data from 497 participants in the National Institute of Mental Health Family Study of the Mood Disorder Spectrum to characterize differences in within-day temporal patterns of mood across the major mood disorder subtypes, including Major Depressive Disorder and Bipolar Disorder Types I and II.
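    For two continuous margins, the classical Kendall's tau bridge to the latent Gaussian-copula correlation is rho = sin(pi * tau / 2). The sketch below computes a sample tau and applies that bridge; the bridging functions for binary, ordinal, and truncated margins differ and are omitted, as is the smoothing the abstract builds into the bridging step.

```python
import numpy as np

def kendall_tau(x, y):
    """Sample Kendall's tau (tau-a, assuming no ties) from
    pairwise concordance signs."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
    return 2 * s / (n * (n - 1))

def latent_corr(x, y):
    """Bridge tau to the latent normal correlation for two
    continuous margins: rho = sin(pi * tau / 2)."""
    return np.sin(np.pi * kendall_tau(x, y) / 2)
```

    Because tau depends only on ranks, the bridge is invariant to the unknown monotone marginal transforms, which is what makes the copula approach work for mixed types.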